
    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract: a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of the various stakeholders affected by AI systems and for monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.'
    Comment: (in press), Ethics and Information Technology, 2017

    Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm

    NLP tasks are often limited by the scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1,246 million tweets containing one of 64 common emojis, we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion, and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yields a performance improvement over previous distant supervision approaches.
    Comment: Accepted at EMNLP 2017. Please include EMNLP in any citations. Minor changes from the EMNLP camera-ready version. 9 pages + references and supplementary material
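    To make the distant-supervision setup concrete, here is a minimal Python sketch (not the authors' code) of how tweets containing exactly one emoji from a small label set could be turned into noisy (text, label) pairs. The emoji list, the make_distant_labels helper, and the sample tweets are illustrative assumptions; the real pipeline operates on 1,246 million tweets and 64 emoji classes.

```python
# Hedged sketch of emoji-based distant supervision, not the paper's pipeline.
# Keeps only tweets containing exactly one emoji from a small label set and
# uses that emoji as the noisy class label.

EMOJI_CLASSES = ["😂", "❤", "😭", "🔥"]  # tiny stand-in for the paper's 64 emojis

def make_distant_labels(tweets):
    """Yield (text, label_index) pairs labelled by the single emoji each tweet contains."""
    for tweet in tweets:
        present = [e for e in EMOJI_CLASSES if e in tweet]
        if len(present) != 1:                      # keep only unambiguous single-emoji tweets
            continue
        emoji = present[0]
        text = tweet.replace(emoji, "").strip()    # remove the label from the input text
        yield text, EMOJI_CLASSES.index(emoji)

if __name__ == "__main__":
    sample = ["so proud of you ❤", "this meeting again 😭", "no emoji here"]
    for text, label in make_distant_labels(sample):
        print(label, text)
```

    The resulting noisy pairs would then feed a standard supervised classifier; the diversity of emoji labels, rather than this filtering step, is what the paper credits for the richer representations.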

    Pareto Optimality and Strategy Proofness in Group Argument Evaluation (Extended Version)

    An inconsistent knowledge base can be abstracted as a set of arguments and a defeat relation among them. There can be more than one consistent way to evaluate such an argumentation graph. Collective argument evaluation is the problem of aggregating the opinions of multiple agents on how a given set of arguments should be evaluated. Because agents have their own individual preferences about what the outcome ought to be, it is crucial to ensure not only that the outcome is logically consistent, but also that it satisfies measures of social optimality and immunity to strategic manipulation. In this paper, we analyze three previously introduced argument-based aggregation operators with respect to Pareto optimality and strategy-proofness under different general classes of agent preferences. We highlight fundamental trade-offs between strategic manipulability and social optimality on the one hand, and classical logical criteria on the other. Our results motivate further investigation into the relationship between social choice and argumentation theory. They are also relevant for choosing an appropriate aggregation operator given the criteria considered most important, as well as the nature of agents' preferences.
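    As a rough illustration of the collective argument evaluation setting (not one of the three operators analyzed in the paper), the following Python sketch aggregates individual labellings argument by argument with a plurality vote. The aggregate function and the toy agent labellings are assumptions for illustration; as the abstract stresses, such naive aggregation need not yield a logically consistent outcome.

```python
# Hedged sketch: argument-wise plurality aggregation of labellings
# ('in' / 'out' / 'undec'). Illustrative only; not necessarily one of the
# three operators analyzed in the paper.
from collections import Counter

def aggregate(labellings):
    """labellings: list of dicts mapping argument -> 'in' | 'out' | 'undec'."""
    outcome = {}
    for arg in labellings[0]:
        votes = Counter(labelling[arg] for labelling in labellings)
        outcome[arg] = votes.most_common(1)[0][0]  # plurality label per argument
    return outcome

if __name__ == "__main__":
    # Three agents evaluating two mutually attacking arguments a and b.
    agents = [
        {"a": "in",  "b": "out"},
        {"a": "in",  "b": "in"},
        {"a": "out", "b": "in"},
    ]
    print(aggregate(agents))  # {'a': 'in', 'b': 'in'} -- inconsistent if a and b attack each other
```

    The inconsistent toy outcome shows why the paper's operators must trade off logical constraints against social-choice properties such as Pareto optimality and strategy-proofness.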

    Inducing Peer Pressure to Promote Cooperation

    Cooperation in a large society of self-interested individuals is notoriously difficult to achieve when the externality of one individual's action is spread thin and wide across the whole society. This leads to the ‘tragedy of the commons’, in which rational action ultimately makes everyone worse off. Traditional policies to promote cooperation involve Pigouvian taxation or subsidies that make individuals internalize the externality they incur. We introduce a new approach to achieving global cooperation by localizing externalities to one's peers in a social network, thus leveraging the power of peer pressure to regulate behavior. The mechanism relies on a joint model of externalities and peer pressure. Surprisingly, this mechanism can require a lower budget to operate than the Pigouvian mechanism, even when accounting for the social cost of peer pressure. Even when the available budget is very low, the social mechanisms achieve a greater improvement in the outcome.
    Funding: Martin Family Fellowship for Sustainability; U.S. Army Research Laboratory (Cooperative Agreement W911NF-09-2-0053); United States Air Force Office of Scientific Research (Award FA9550-10-1-0122)
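    The core idea of localizing externalities can be illustrated with a toy Python sketch (a simplification, not the paper's actual mechanism or its peer-pressure cost model): instead of spreading each agent's externality evenly over society, the same total cost is redirected onto that agent's network neighbours, who then have a concentrated incentive to apply pressure. The ring network and unit cost below are made-up inputs.

```python
# Toy illustration only: redirect each agent's externality to its neighbours
# instead of spreading it across the whole society. Not the paper's model.

def global_externality(actions, cost_per_unit):
    """Each unit of action imposes cost_per_unit, spread evenly over all agents."""
    n = len(actions)
    total = sum(actions) * cost_per_unit
    return [total / n] * n

def localized_externality(actions, cost_per_unit, neighbours):
    """Same total cost, but each agent's externality falls only on its neighbours."""
    burden = [0.0] * len(actions)
    for i, action in enumerate(actions):
        share = action * cost_per_unit / len(neighbours[i])
        for j in neighbours[i]:
            burden[j] += share
    return burden

if __name__ == "__main__":
    actions = [2, 0, 1, 3]                                   # e.g. units of pollution per agent
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}      # hypothetical ring network
    print(global_externality(actions, 1.0))                  # thin, society-wide burden
    print(localized_externality(actions, 1.0, ring))         # concentrated on neighbours
```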

    Learning in Repeated Games: Human Versus Machine

    While Artificial Intelligence has successfully outperformed humans in complex combinatorial games (such as chess and checkers), humans have retained their supremacy in social interactions that require intuition and adaptation, such as cooperation and coordination games. Despite significant advances in learning algorithms, most algorithms adapt at time scales that are not relevant for interactions with humans, and the advances in AI on this front have therefore remained largely theoretical. This has also hindered the experimental evaluation of how these algorithms perform against humans, as the length of the experiments needed to evaluate them is beyond what humans can reasonably be expected to endure (at most around 100 repetitions). This scenario is rapidly changing, as recent algorithms are able to converge to their functional regimes on shorter time scales. This shift also opens up possibilities for experimental investigation: where do humans stand compared with these new algorithms? We evaluate humans experimentally against a representative element of these fast-converging algorithms. Our results indicate that the performance of at least one of these algorithms is comparable to, and even exceeds, the performance of people.
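    For context on what "learning in a repeated game" looks like algorithmically, here is a hedged Python sketch of fictitious play in a 2x2 coordination game. Fictitious play is a classic (and comparatively slow) learning rule used here purely for illustration; it is not the fast-converging algorithm the paper evaluates against humans.

```python
# Illustrative only: fictitious play in a 2x2 coordination game.
# This is NOT the fast-converging algorithm evaluated in the paper.

PAYOFF = [[1, 0],
          [0, 1]]  # both players earn 1 if they choose the same action, else 0

def best_response(opponent_counts):
    """Best reply to the opponent's empirical action frequencies so far."""
    total = sum(opponent_counts) or 1
    expected = [sum(PAYOFF[a][b] * opponent_counts[b] / total for b in range(2))
                for a in range(2)]
    return expected.index(max(expected))

def play(rounds=100):
    counts = [[0, 0], [0, 0]]          # counts[i][a]: times player i has played action a
    for _ in range(rounds):
        a0 = best_response(counts[1])  # player 0 replies to player 1's history
        a1 = best_response(counts[0])  # player 1 replies to player 0's history
        counts[0][a0] += 1
        counts[1][a1] += 1
    return counts

if __name__ == "__main__":
    print(play())  # the two players typically lock onto the same action
```

    The point of the sketch is the time-scale issue the abstract raises: rules of this kind need many repetitions to settle, whereas human experiments allow only on the order of 100 rounds.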

    Small cities face greater impact from automation

    The city has proven to be the most successful form of human agglomeration and provides wide employment opportunities for its dwellers. As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: how will automation affect employment in cities? Here, we provide a comparative picture of the impact of automation across U.S. urban areas. Small cities will undergo greater adjustments, such as worker displacement and job content substitutions. We demonstrate that large cities exhibit increased occupational and skill specialization due to an increased abundance of managerial and technical professions. These occupations are not easily automatable and thus reduce the potential impact of automation in large cities. Our results pass several robustness checks, including potential errors in the estimation of occupational automation and sub-sampling of occupations. Our study provides the first empirical law connecting two societal forces: urban agglomeration and automation's impact on employment.